Importance Sampling: Intrinsic Dimension and Computational Cost

Author

  • S. Agapiou
Abstract

The basic idea of importance sampling is to use independent samples from a proposal measure in order to approximate expectations with respect to a target measure. It is key to understand how many samples are required in order to guarantee accurate approximations. Intuitively, some notion of distance between the target and the proposal should determine the computational cost of the method. A major challenge is to quantify this distance in terms of parameters or statistics that are pertinent for the practitioner. The subject has attracted substantial interest from within a variety of communities. The objective of this paper is to overview and unify the resulting literature by creating an overarching framework. A general theory is presented, with a focus on the use of importance sampling in Bayesian inverse problems and filtering.
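To make the basic idea concrete, here is a minimal, self-contained sketch of self-normalized importance sampling (not taken from the paper; the Gaussian target and proposal, the test function `phi`, and the effective-sample-size diagnostic `ess` are all illustrative choices):

```python
# Minimal importance sampling sketch (illustrative, not the paper's method):
# approximate E_pi[phi(X)] for a target pi using i.i.d. samples from a proposal q,
# with self-normalized weights proportional to pi(x_i) / q(x_i).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

target = stats.norm(loc=2.0, scale=1.0)    # target measure (assumed Gaussian here)
proposal = stats.norm(loc=0.0, scale=1.0)  # proposal measure we can sample from
phi = lambda x: x**2                       # test function whose expectation we want

n = 10_000
x = proposal.rvs(size=n, random_state=rng)       # independent proposal samples
log_w = target.logpdf(x) - proposal.logpdf(x)    # unnormalized log-weights
w = np.exp(log_w - log_w.max())                  # stabilize before normalizing
w /= w.sum()                                     # self-normalized weights

estimate = np.sum(w * phi(x))       # approximates E_target[phi(X)] (exact value is 5)
ess = 1.0 / np.sum(w**2)            # effective sample size: degrades as target and
                                    # proposal move apart, echoing the cost question
print(f"IS estimate of E[X^2]: {estimate:.3f}, ESS = {ess:.1f} out of {n}")
```

The effective sample size reported at the end is one common diagnostic of the "distance" between target and proposal that the abstract alludes to: the further apart the two measures are, the fewer samples effectively contribute to the estimate.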


Similar resources

Importance Sampling: Computational Complexity and Intrinsic Dimension

Abstract: The basic idea of importance sampling is to use independent samples from one measure in order to approximate expectations with respect to another measure. Understanding how many samples are needed is key to understanding the computational complexity of the method, and hence to understanding when it will be effective and when it will not. It is intuitive that the size of the difference...


Efficient importance sampling for ML estimation of SCD models

The evaluation of the likelihood function of the stochastic conditional duration model requires the computation of an integral whose dimension equals the sample size. We apply the efficient importance sampling (EIS) method for computing this integral. We compare EIS-based ML estimation with QML estimation based on the Kalman filter. We find that EIS-ML estimation is more precise statistically, at a cost o...


Lipschitz Density-Ratios, Structured Data, and Data-driven Tuning

Density-ratio estimation (i.e., estimating f = f_Q/f_P for two unknown distributions Q and P) has proved useful in many machine learning tasks, e.g., risk calibration in transfer learning and two-sample tests, and also in common techniques such as importance sampling and bias correction. While there are many important analyses of this estimation problem, the present paper derives convergence rat...


Enhanced Conformational Sampling Using Replica Exchange with Concurrent Solute Scaling and Hamiltonian Biasing Realized in One Dimension

Replica exchange (REX) is a powerful computational tool for overcoming the quasi-ergodic sampling problem of complex molecular systems. Recently, several multidimensional extensions of this method have been developed to realize exchanges in both temperature and biasing potential space or the use of multiple biasing potentials to improve sampling efficiency. However, increased computational cost...


Cost-Driven Multiple Importance Sampling for Monte-Carlo Rendering

The global illumination or transport problem can be considered as a sequence of integrals, and its Monte Carlo solutions as different sampling techniques. Multiple importance sampling takes advantage of different sampling strategies and combines the results obtained with them; a generic sketch of this combination is given below. In this paper we propose the combination of very different global illumination algorithms in a way that their st...
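As a point of reference for how multiple importance sampling combines strategies, here is a small sketch of the standard balance heuristic with two proposal densities; the integrand `f`, the proposals `q1` and `q2`, and the sample counts are illustrative assumptions, and this is not the cost-driven combination proposed in that paper:

```python
# Illustrative multiple importance sampling with the balance heuristic:
# each sample x drawn from strategy i contributes f(x) / (n1*q1(x) + n2*q2(x)),
# which keeps the combined estimator of the integral of f unbiased.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

f = lambda x: np.exp(-x**2) * (1 + np.sin(3 * x))   # integrand (illustrative)
q1 = stats.norm(loc=0.0, scale=0.5)                 # strategy 1: narrow proposal
q2 = stats.norm(loc=0.0, scale=2.0)                 # strategy 2: wide proposal

n1, n2 = 5_000, 5_000
x1 = q1.rvs(size=n1, random_state=rng)
x2 = q2.rvs(size=n2, random_state=rng)

def balance_denominator(x):
    # Balance-heuristic denominator: mixture of the sampling densities,
    # weighted by the number of samples drawn from each strategy.
    return n1 * q1.pdf(x) + n2 * q2.pdf(x)

estimate = np.sum(f(x1) / balance_denominator(x1)) + np.sum(f(x2) / balance_denominator(x2))
# Exact value is sqrt(pi) ~ 1.7725, since the sin term integrates to zero.
print(f"MIS estimate of the integral: {estimate:.4f}")
```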



Journal:

Volume   Issue

Pages   -

Publication date: 2017